Observation 2: Training Time
The data on training time reveals that ResNet50 has the shortest training time among the three models, taking
approximately 37.3 minutes to complete training. In contrast, ResNet152 has the longest training time, requiring
about 65.1 minutes, while ResNet101 falls in between at around 59 minutes. This discrepancy is expected from the models' architectures: ResNet101 and ResNet152 are substantially deeper than ResNet50, so each training pass involves more layers and parameters. Despite having the shortest training time, ResNet50 achieves the highest testing accuracy, indicating an efficient trade-off between training cost and performance.
Overfitting
In summary, while the testing accuracies for all three models are high, a definitive evaluation of overfitting is
challenging due to the absence of training accuracy values. Overfitting typically occurs when a model performs
significantly better on the training data than on unseen testing data. To thoroughly assess overfitting, it's
important to compare the training and testing accuracy values and consider whether the models are learning
noise from the training data that doesn't generalize well to new data.
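The comparison described above can be sketched as a simple gap check. The accuracy values below are illustrative placeholders, not results reported in this work, and the 0.05 threshold is an arbitrary assumption:

```python
def overfitting_gap(train_acc: float, test_acc: float, threshold: float = 0.05) -> bool:
    """Flag potential overfitting when the model performs notably
    better on training data than on unseen test data."""
    return (train_acc - test_acc) > threshold

# Illustrative values only -- training accuracies are not reported here.
print(overfitting_gap(0.99, 0.93))  # gap of 0.06 exceeds the threshold -> True
print(overfitting_gap(0.95, 0.94))  # gap of 0.01 is within the threshold -> False
```

In practice the threshold would be chosen relative to the variance of the validation metric rather than fixed in advance.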
7. YOLO FOR SPORTS IMAGE CLASSIFICATION
We trained the YOLOv8s model, a popular choice for image classification in deep learning, and made the following observations from the training data:
Training Progress: The training loss ("train/loss") steadily decreases as the number of epochs
increases. This suggests that the model is effectively learning from the training data and optimizing
its parameters to minimize the loss.
Validation Performance: The validation loss ("val/loss") also decreases initially, indicating that the model is improving its generalization to unseen data. However, the decrease starts to plateau after a certain number of epochs, suggesting that the model's improvement is slowing down.
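A plateau of the kind described above can be detected programmatically. This is a minimal sketch, not part of the training pipeline used here; the loss values, `patience`, and `min_delta` are all illustrative assumptions:

```python
def has_plateaued(val_losses, patience=3, min_delta=1e-3):
    """Return True when validation loss has not improved by at least
    `min_delta` over the last `patience` epochs."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])   # best loss before the window
    recent_best = min(val_losses[-patience:])   # best loss inside the window
    return best_before - recent_best < min_delta

# Synthetic loss curve: rapid early progress, then negligible improvement.
losses = [1.20, 0.80, 0.55, 0.41, 0.4095, 0.4093, 0.4092]
print(has_plateaued(losses))  # True: recent epochs barely improve
```

The same test is the usual basis for early stopping, which would cap the training cost once the curve flattens.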
Accuracy: The top-1 accuracy on the training data ("metrics/accuracy_top1") increases consistently
with each epoch. This indicates that the model is becoming more accurate in predicting the correct
class with the highest confidence.
Top-5 Accuracy: The top-5 accuracy on the training data ("metrics/accuracy_top5") is consistently
high, close to 1. This suggests that even if the model's top prediction is not always accurate, it is
often able to include the correct class within its top 5 predictions.
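The top-1 and top-5 metrics discussed in the two observations above can be computed directly from per-class scores. The scores and labels below are synthetic, for illustration only:

```python
def topk_correct(scores, label, k):
    """True if `label` is among the k highest-scoring classes."""
    topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return label in topk

def topk_accuracy(batch_scores, labels, k):
    """Fraction of samples whose true label falls in the top-k predictions."""
    hits = sum(topk_correct(s, y, k) for s, y in zip(batch_scores, labels))
    return hits / len(labels)

# Two synthetic samples over 6 classes.
scores = [[0.1, 0.5, 0.2, 0.1, 0.05, 0.05],   # highest score: class 1
          [0.3, 0.1, 0.25, 0.2, 0.1, 0.05]]   # highest score: class 0
labels = [1, 2]
print(topk_accuracy(scores, labels, k=1))  # 0.5: only the first label matches
print(topk_accuracy(scores, labels, k=5))  # 1.0: class 2 is within sample 2's top 5
```

This illustrates why top-5 accuracy sits near 1 even when top-1 accuracy is lower: the correct class only needs to appear somewhere in the five highest-scoring predictions.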
Learning Rate: The learning rate values ("lr/pg0," "lr/pg1," "lr/pg2") are decreasing with each
epoch. This suggests that a learning rate schedule is in place, which is a common practice in training
to fine-tune the model as optimization progresses.
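A decreasing per-epoch learning rate of the kind observed in "lr/pg0"–"lr/pg2" is often produced by a linear decay schedule. The schedule shape and the values `lr0`, `lrf`, and `epochs` below are assumptions for illustration; the text does not state which schedule was used:

```python
def linear_lr(epoch, epochs=50, lr0=0.01, lrf=0.01):
    """Linearly decay the learning rate from lr0 at epoch 0
    down to lr0 * lrf at the final epoch."""
    return lr0 * ((1 - epoch / epochs) * (1 - lrf) + lrf)

print(linear_lr(0))    # ~0.01 at the start of training
print(linear_lr(25))   # roughly halfway between the two endpoints
print(linear_lr(50))   # ~0.0001 at the final epoch
```

Shrinking the learning rate this way lets the optimizer take large steps early and progressively smaller ones as it fine-tunes near a minimum, matching the behavior described above.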
Generalization and Overfitting: The model's performance on the validation data is comparable to
its performance on the training data, which is a positive sign of good generalization. However,
monitoring the gap between training and validation metrics over more epochs is important to ensure
that the model doesn't start overfitting the training data.
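The epoch-by-epoch monitoring suggested above can be sketched with the logged column names mentioned in this section. The numbers are synthetic, not the actual training results:

```python
# Synthetic training log keyed by the column names used above
# (values are illustrative, not the real run).
log = {
    "train/loss": [1.10, 0.70, 0.45, 0.30],
    "val/loss":   [1.15, 0.78, 0.55, 0.42],
}

def generalization_gaps(log):
    """Per-epoch difference between validation and training loss;
    a steadily widening gap is an early sign of overfitting."""
    return [v - t for t, v in zip(log["train/loss"], log["val/loss"])]

gaps = generalization_gaps(log)
widening = all(later >= earlier for earlier, later in zip(gaps, gaps[1:]))
print(gaps)      # small but slowly growing gap in this synthetic run
print(widening)  # True: the gap never shrinks, so it warrants watching
```

Tracking this gap over additional epochs, as the text recommends, shows whether validation loss eventually diverges from training loss even while the latter keeps falling.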